Exploration and Exploitation in Parkinson’s Disease: Computational Analyses

Authors
Affiliations

Björn Meder

Health and Medical University, Potsdam, Germany

Martha Sterf

Medical School Berlin, Berlin, Germany

Charley M. Wu

University of Tübingen, Tübingen, Germany

Matthias Guggenmos

Health and Medical University, Potsdam, Germany

Published

August 1, 2025

Code
# Housekeeping: Load packages and helper functions
# Housekeeping
knitr::opts_chunk$set(echo = TRUE)
knitr::opts_chunk$set(message = FALSE)
knitr::opts_chunk$set(warning = FALSE)
knitr::opts_chunk$set(fig.align='center')

options(knitr.kable.NA = '')

packages <- c('gridExtra', 'BayesFactor', 'tidyverse', "RColorBrewer", "lme4", "sjPlot", "lsr", "brms", "kableExtra", "afex", "emmeans", "viridis", "ggpubr", "hms", "scales", "cowplot", "waffle", "ggthemes", "parameters", "rstatix", "magick", "grid")

installed <- packages %in% rownames(installed.packages())
if (any(!installed)) {
  install.packages(packages[!installed])
}

# Load all packages
lapply(packages, require, character.only = TRUE)

set.seed(0815)

# file with various statistical functions, among other things it provides tests for Bayes Factors (BFs)
source('statisticalTests.R')

# Wrapper for brm models: saves the fitted model to disk the first time it is run, otherwise loads it from disk
run_model <- function(expr, modelName, path='brm', reuse = TRUE) {
  path <- paste0(path, '/', modelName, ".brm")
  fit <- NULL # ensures the check below also works when reuse = FALSE
  if (reuse) {
    fit <- suppressWarnings(try(readRDS(path), silent = TRUE))
  }
  if (is.null(fit) || is(fit, "try-error")) {
    fit <- eval(expr)
    saveRDS(fit, file = path)
  }
  fit
}


# Setting some plotting params
w_box          <- 0.2      # width of boxplot, also used for jittering points and lines    
line_jitter    <- w_box / 2
xAnnotate      <- -0.3

# jitter params
jit_height  <- 0.01
jit_width   <- 0.05
jit_alpha   <- 0.6

# colors for age groups
groupcolors    <- c("#7570b3", "#1b9e77", "#d95f02")
choice3_colors <- c("#e7298a", "#66a61e", "#e6ab02")
Code
########################################################
# get behavioral data
########################################################
dat_gridsearch <- read_csv("data/data_gridsearch_Parkinson.csv", show_col_types = FALSE) %>% 
  mutate(type_choice  = factor(type_choice, levels = c("Repeat", "Near", "Far"))) 

# normalize reward and previous reward
dat_gridsearch$z = dat_gridsearch$z / 50
dat_gridsearch$previous_reward = dat_gridsearch$previous_reward / 50

########################################################
# get subject data
########################################################
dat_sample <- read_delim("data/data_gridsearch_subjects.csv", escape_double = FALSE, trim_ws = TRUE, show_col_types = FALSE) %>% 
  mutate(gender = as.factor(gender),
        group = fct_recode(group,
                       "Control" = "PNP",
                       "PD+"     = "PD+",
                       "PD-"     = "PD-"),
    group = fct_relevel(group, "Control", "PD+", "PD-")
  ) %>% 
  mutate(last_ldopa = if_else(group != "Control", as_hms(last_ldopa), as_hms(NA)),
         next_ldopa = if_else(group != "Control", as_hms(next_ldopa), as_hms(NA)),
         time_exp = if_else(group != "Control", as_hms(time_exp), as_hms(NA))) %>% 
  mutate(time_since_ldopa = as.numeric(time_exp - last_ldopa, unit = "mins"))


dat <- dat_sample %>% 
  left_join(dat_gridsearch, by = "id") %>% 
  arrange(group)

########################################################
# get modeling data
########################################################
modelFits <- read.csv('modelResults/modelFit.csv') # generated by dataProcessing_gridSearchParkinson.R
# length(unique(modelFits$id))

1 Computational Analyses

Complementing the behavioral analyses, we study exploration and exploitation in PD through the lens of a computational model, the Gaussian Process Upper Confidence Bound (GP-UCB) model. This model integrates similarity-based generalization with two distinct exploration mechanisms: directed exploration, which seeks to reduce uncertainty about rewards, and random exploration, which adds stochastic noise to the search process without being directed towards a particular goal (Wu et al., 2018; Wu et al., 2025). In previous research using the same paradigm, this model has provided the best account of human behavior and enabled the decomposition of exploration into distinct mechanisms (Giron et al., 2023; Meder et al., 2021; Schulz et al., 2019; Wu et al., 2018; Wu et al., 2020).

1.1 Gaussian Process Upper Confidence Bound (GP-UCB) Model

The GP-UCB model comprises three components:

  1. a learning model, which uses Bayesian inference to generate predictions about the rewards associated with each option (tile),
  2. a sampling strategy, which uses reward expectations and associated uncertainty to evaluate how promising each option is, and
  3. a choice rule, which converts options’ values into choice probabilities.
Note

Add details

1.1.1 Learning Model

1.1.2 Sampling Strategy

1.1.3 Choice rule

1.1.4 Model parameters

Associated with each model component is a free parameter that we estimate through out-of-sample cross validation. These parameters provide a window into distinct aspects of learning and exploration:

  1. The length-scale parameter \(\lambda\) of the RBF kernel captures how strongly a participant generalizes based on the observed evidence, i.e., the rewards obtained from previous choices.
  2. The uncertainty bonus \(\beta\) represents the level of directed exploration, i.e., how much expected rewards are inflated by an “uncertainty bonus”.
  3. The temperature parameter \(\tau\) corresponds to the amount of sampling noise, i.e., extent of random exploration.
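As a rough illustration of how the three components interact, the following base-R sketch performs a single GP-UCB step on a toy 1D grid. All names and values here (`gp_ucb_step`, the grid, the parameter settings) are illustrative assumptions for exposition, not the study's fitting code.

```r
# One GP-UCB step on a toy 1D grid (illustrative sketch, not the analysis code).
# lambda: generalization (RBF length-scale); beta: uncertainty bonus; tau: softmax temperature.
gp_ucb_step <- function(X_obs, y_obs, X_all, lambda, beta, tau, noise = 1e-4) {
  rbf   <- function(a, b) exp(-outer(a, b, function(u, v) (u - v)^2) / (2 * lambda^2))
  K     <- rbf(X_obs, X_obs) + diag(noise, length(X_obs))
  K_s   <- rbf(X_all, X_obs)
  K_inv <- solve(K)
  mu    <- as.vector(K_s %*% K_inv %*% y_obs)                 # learning model: posterior mean
  sigma <- sqrt(pmax(1 - diag(K_s %*% K_inv %*% t(K_s)), 0))  # posterior uncertainty
  ucb   <- mu + beta * sigma                                  # sampling strategy: uncertainty bonus
  p     <- exp((ucb - max(ucb)) / tau)                        # choice rule: softmax
  list(mu = mu, sigma = sigma, ucb = ucb, p = p / sum(p))
}

# Two observed rewards on an 8-option grid
out <- gp_ucb_step(X_obs = c(2, 5), y_obs = c(0.4, 0.8),
                   X_all = 1:8, lambda = 1, beta = 0.5, tau = 0.1)
```

Observed options have near-zero posterior uncertainty, distant unobserved options retain the prior uncertainty, and the softmax converts the resulting UCB values into choice probabilities.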

2 Model comparison

We tested the GP-UCB model’s ability to capture learning and to predict each participant’s search and decision-making behavior. To assess the contribution of each model component (generalization, uncertainty-directed exploration, and random exploration), we compare the predictive accuracy of the GP-UCB model to lesioned model variants in which each component is removed.

\(\lambda\) lesion model: This model removes the ability to generalize, meaning that all options are learned independently (via a Bayesian mean tracker)

\(\beta\) lesion model: No uncertainty-directed exploration (\(\beta=0\)), i.e., options are valued solely based on reward expectations (mean greedy)

\(\tau\) lesion model: Exchanges the softmax choice rule with an \(\epsilon\)-greedy policy as an alternative random exploration mechanism. With probability \(\epsilon\), a random option is selected (each with probability 1/64); with probability 1 − \(\epsilon\), the option with the highest UCB value is chosen. The parameter \(\epsilon\) is estimated for each participant.
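The \(\epsilon\)-greedy policy described above can be sketched in a few lines; `eg_probs` is a hypothetical helper written for illustration, not part of the analysis code.

```r
# Sketch of the epsilon-greedy choice rule used in the tau-lesion model.
# With probability epsilon, choose uniformly among the 64 tiles;
# otherwise, choose the tile with the highest UCB value.
eg_probs <- function(ucb, epsilon) {
  n <- length(ucb)
  p <- rep(epsilon / n, n)                 # uniform random-exploration mass
  best <- which.max(ucb)
  p[best] <- p[best] + (1 - epsilon)       # greedy mass on the best option
  p
}

set.seed(1)
ucb <- rnorm(64)          # toy UCB values for a 64-tile grid
p   <- eg_probs(ucb, epsilon = 0.2)
```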

All models were fitted using leave-one-round-out cross-validation based on maximum likelihood estimation. Model fits are evaluated using the sum of negative log-likelihoods across all out-of-sample predictions.
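The cross-validation scheme can be sketched as follows. For brevity, a trivial stand-in model (a Gaussian fit to the training rounds) replaces the actual GP-UCB maximum-likelihood fit; only the leave-one-round-out structure is the point here.

```r
# Minimal leave-one-round-out cross-validation skeleton (toy stand-in model).
# The real analysis instead fits GP-UCB parameters by MLE on the held-in rounds.
loro_cv <- function(df) {
  rounds <- unique(df$round)
  sapply(rounds, function(r) {
    train <- df[df$round != r, ]              # fit on all other rounds
    test  <- df[df$round == r, ]              # predict the held-out round
    mu <- mean(train$z); s <- sd(train$z)     # stand-in "fitted model"
    -sum(dnorm(test$z, mu, s, log = TRUE))    # out-of-sample negative log-likelihood
  })
}

set.seed(42)
toy <- data.frame(round = rep(1:4, each = 10), z = rnorm(40))
nll <- loro_cv(toy)   # one out-of-sample NLL per held-out round
```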

Models’ predictive accuracy was assessed using a pseudo-\(R^2\) measure, based on the sum of negative log-likelihoods across all out-of-sample predictions. The summed log loss is compared to a random model, such that \(R^2=0\) corresponds to chance performance and \(R^2=1\) corresponds to theoretically perfect predictions.

\[ R^2 = 1 - \frac{\log \mathcal{L}(M_k)}{\log \mathcal{L}(M_{rand})}. \]
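Since the grid has 64 options, the random baseline assigns each choice probability 1/64. The computation then reduces to a few lines; `pseudo_r2` and the numbers below are illustrative.

```r
# Pseudo-R^2 from the summed out-of-sample log loss, relative to a random-choice baseline.
# For n_choices predicted choices over n_options tiles, the random model's
# log-likelihood is n_choices * log(1 / n_options).
pseudo_r2 <- function(nll_model, n_choices, n_options = 64) {
  logL_model <- -nll_model                       # nll_model: summed negative log-likelihood
  logL_rand  <- n_choices * log(1 / n_options)
  1 - logL_model / logL_rand
}

pseudo_r2(nll_model = 300, n_choices = 100)  # better than chance -> R^2 in (0, 1)
```

A model exactly at chance yields \(R^2 = 0\); a model predicting every choice with certainty would yield \(R^2 = 1\).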

Code
# get subject information
groupDF <- dat_sample %>%
  select(id, age, gender,group,BDI,MMSE,hoehn_yahr,last_ldopa,next_ldopa,time_exp,time_since_ldopa) %>%
  group_by(id) %>%
  slice_head(n = 1) %>%
  arrange(group)
  
# groupDF <- dat %>% 
#   group_by(id) %>%
#   slice(1) %>%
#   ungroup()

# length(unique(dat$id))
# length(unique(groupDF$id))

modelFits <- merge(modelFits, groupDF[,c('id', 'group')], by = "id") # merge to add group 

# kernels <- c("RBF", "BMT") # RBF = Radial Basis Function kernel, BMT= Bayesian Mean Tracker
# acqFuncs <- c("GM", "UCB", "EG") # UCB = Upper Confidence Bound, GM=greedyMean, EG = epsilonGreedy
modelFits <-  modelFits %>%
  mutate(kernel=factor(kernel, levels=c('RBF', 'BMT'), labels=c('GP', 'BMT'))) %>%
  mutate(acq=factor(acq, levels=c('UCB', 'GM','epsilonGreedy'), labels=c('UCB', 'meanGreedy', 'epsilonGreedy')))

modelFits$ModelName = paste(modelFits$kernel, modelFits$acq, sep="-")

# Only include key comparisons
modelFits <- subset(modelFits, ModelName %in% c("GP-UCB", "BMT-UCB", "GP-meanGreedy", "GP-epsilonGreedy" ))
modelFits$ModelName = factor(modelFits$ModelName, levels = c('GP-UCB', 'BMT-UCB', 'GP-meanGreedy', 'GP-epsilonGreedy'))

# Two-line names for models
modelFits$shortname <- factor(modelFits$ModelName, labels = c('GP\nUCB', 'lambda\nlesion', 'beta\nlesion', 'tau\nlesion'))

# perform frequentist and Bayesian t-tests and make labels for plotting 
comparisons_df <- modelFits %>%
  group_by(group) %>%
  group_modify(~{
    # pairwise t-tests
    comparisons <- list(
      c("GP\nUCB", "lambda\nlesion"),
      c("GP\nUCB", "beta\nlesion"),
      c("GP\nUCB", "tau\nlesion")
    )
    
    t_res <- t_test(R2 ~ shortname, 
                    data = .x, 
                    paired = TRUE,
                    comparisons = comparisons) %>%
      add_xy_position(x = "shortname") %>%
      mutate(
        p.format = case_when(
          p < 0.001 ~ "p<.001",
          TRUE ~ paste0("p=", signif(p, 2))
        )
      )
    
    # compute Bayes Factors BF10
    t_res$BF <- purrr::pmap_dbl(
      list(t_res$group1, t_res$group2),
      function(g1, g2) {
        x1 <- .x$R2[.x$shortname == g1]
        x2 <- .x$R2[.x$shortname == g2]
        bf <- BayesFactor::ttestBF(x = x1, y = x2, paired = TRUE)
        as.numeric(BayesFactor::extractBF(bf)$bf)
      }
    )
    
    t_res
  }) %>% 
  mutate( # make BF label
    BF.format = case_when(
      BF > 100 ~ "BF>100",
      TRUE ~ paste0("BF=", signif(BF, 2))
    )) %>% 
  mutate(plot_label = paste0(p.format, ", ", BF.format)) # make plot label

# get y-positions from the first group's comparisons
ref_ypos <- comparisons_df %>%
  filter(group == first(levels(modelFits$group))) %>%
  pull(y.position)

# override with manually chosen positions
ref_ypos <- c(0.68, 0.74, 0.8)

comparisons_df <- comparisons_df %>%
  group_by(group) %>%
  mutate(y.position = ref_ypos) %>%
  ungroup()

p_model_comparison <- ggplot(modelFits, aes(x = shortname, y = R2, fill = group, shape = group, color = group)) +
  facet_wrap(~group, nrow = 1) +
  geom_boxplot(alpha = 0.2, width = 0.4, outlier.shape = NA) +  
  geom_jitter(width = 0.15, size = 2) +  
  stat_summary(fun = mean, geom = "point", shape = 23, fill = "white", size = 4) +
  scale_y_continuous(name = expression("Predictive accuracy " ~ R^2),
                     breaks = c(0, 0.5, 1))+  
  scale_x_discrete("",
                   labels = c(
                     "lambda\nlesion" = "\u03bb\nlesion",
                     "beta\nlesion"   = "\u03b2\nlesion",
                     "tau\nlesion"    = "\u03c4\nlesion"
                   )) + 
  scale_fill_manual(values = groupcolors) + 
  scale_color_manual(values = groupcolors) + 
  ggtitle("Model comparison: GP-UCB vs. lesioned models") + 
  theme_classic() +
  theme(strip.background = element_blank(),  
        strip.text = element_text(color = "black", size=18),
        legend.position = "none",
        legend.justification = c(0, 1),
        axis.title = element_text(size = 18),
        axis.text = element_text(size = 18),
        plot.title = element_text(size = 24),
        plot.margin = margin(0, 0, 20, 0) # positive bottom margin to avoid artifacts when later combining plots with cowplot
  )

   
p_model_comparison
ggsave("plots/model_comparison.png", p_model_comparison, dpi=300, width = 10, height = 5)
#ggsave("plots/model_comparison.pdf", p_model_comparison, width = 10, height = 5)

# plot for Computational Psychiatry Conference (CPP; Tübingen, July 2025)
# ggboxplot(modelFits, 
#           x = "shortname", 
#           y = "R2",
#           color = "group", palette = groupcolors, fill = "group", alpha = 0.2,
#           add = "jitter", jitter.size = 1, shape = "group",
#           title = "Model comparison: GP-UCB vs. lesioned models") +
#   facet_wrap(~group, nrow = 1) +
#   ylab(bquote(R^2)) +
#   xlab("") +
#   stat_pvalue_manual(
#     filter(comparisons_df, group == "Control"),
#     label = "plot_label",
#     tip.length = 0.01, bracket.size = 0.3, size = 3
#   ) +
#   stat_pvalue_manual(
#     filter(comparisons_df, group == "PD+"),
#     label = "plot_label",
#     tip.length = 0.01, bracket.size = 0.3, size = 3
#   ) +
#   stat_pvalue_manual(
#     filter(comparisons_df, group == "PD-"),
#     label = "plot_label",
#     tip.length = 0.01, bracket.size = 0.3, size = 3
#   ) +
#   theme_classic()+
#   theme(strip.background = element_blank(),  
#         strip.text = element_text(color = "black", size=12),
#         legend.position = "none",
#         plot.title = element_text(size = 24),
#         axis.text= element_text(colour="black", size = 14),
#         axis.title= element_text(colour="black", size = 14)
#   ) 
# 
# ggsave("plots/model_comparison_CPP.png", p_model_comparison_CPP, dpi=300, width = 9, height = 5)
Figure 1: Predictive accuracy of GP-UCB model and lesioned variants.

2.1 Model comparison: Control

  • GP-UCB vs. lambda lesion: \(t(33)=3.0\), \(p=.005\), \(d=0.2\), \(BF=7.5\)
  • GP-UCB vs. beta lesion: \(t(33)=3.4\), \(p=.002\), \(d=0.2\), \(BF=19\)
  • GP-UCB vs. tau lesion: \(t(33)=7.7\), \(p<.001\), \(d=0.6\), \(BF>100\)

2.2 Model comparison: PD+

  • GP-UCB vs. lambda lesion: \(t(32)=3.4\), \(p=.002\), \(d=0.4\), \(BF=20\)
  • GP-UCB vs. beta lesion: \(t(32)=3.7\), \(p<.001\), \(d=0.4\), \(BF=40\)
  • GP-UCB vs. tau lesion: \(t(32)=8.5\), \(p<.001\), \(d=0.9\), \(BF>100\)

2.3 Model comparison: PD-

  • GP-UCB vs. lambda lesion: \(t(30)=3.6\), \(p=.001\), \(d=0.7\), \(BF=27\)
  • GP-UCB vs. beta lesion: \(t(30)=5.4\), \(p<.001\), \(d=1.1\), \(BF>100\)
  • GP-UCB vs. tau lesion: \(t(30)=4.9\), \(p<.001\), \(d=1.0\), \(BF>100\)

3 Model-based classification of participants

Code
# classify participants according to model R^2
df_participant_classification <- modelFits %>%
  group_by(id) %>%
  slice_max(order_by = R2, n = 1) %>%
  select(id, group, ModelName, shortname, R2) %>% 
  ungroup() %>% 
  rename(best_ModelName = ModelName,
         best_shortname = shortname,
         best_R2 = R2)

df_counts <- df_participant_classification %>%
  count(group, best_shortname)

df_percent <- df_counts %>%
  group_by(group) %>%
  mutate(
    total_in_group = sum(n),
    percent = round((n / total_in_group) * 100, 1)
  ) %>%
  ungroup()

# add most predictive model for each subject to df modelFits
modelFits <- modelFits %>% 
  left_join(df_participant_classification, by = c("id", "group"))

We classified participants based on which model achieved the highest cross-validated predictive accuracy (highest \(R^2\); see the participant classification figure). In each group, the GP-UCB model was the most predictive model for the majority of participants (Control: 55.9%, PD+: 57.6%, PD-: 58.1%).

In total, out of 98 participants, 56 (57.1%) were best described by the GP-UCB model, 22 (22.4%) by the lambda lesion model, 13 (13.3%) by the beta lesion model, and 7 (7.1%) by the tau lesion model. The results suggest that all three components of the GP-UCB model are relevant for predicting participants’ behavior.

Code
# waffle plot
p_classification_participants <- ggplot(
  data = df_counts, 
  aes(fill=best_shortname, values=n)
) +
  geom_waffle(
    color = "white", 
    size = 1, 
    n_rows = 5
  ) +
  facet_wrap(~group, nrow=1) +
  scale_x_discrete(
    expand = c(0,0,0,0)
  ) +
  scale_y_discrete(
    expand = c(0,0,0,0)
  ) +
  ggthemes::scale_fill_tableau(name=NULL) +
  coord_equal() +
  ggtitle ("Model comparison: Participant classification") +
theme_classic() +
  theme(
    legend.title = element_blank(),
    plot.title = element_text(size = 24),
    legend.position = 'right',
    strip.text = element_text(color = "black", size=18),
    legend.text =  element_text(colour="black", size=18),
    text = element_text(colour = "black"),
    strip.background =element_blank(),
    axis.text= element_text(colour="black", size = 18),
    panel.grid.major = element_blank(),
    panel.grid.minor = element_blank(),
    panel.spacing = unit(3, "lines"),
    legend.key.spacing.y = unit(0.6, "cm"),
    plot.margin = margin(-70, 0, -20, 0) # negative bottom margin to avoid artifacts when later combining plots with cowplot
    )

ggsave("plots/participant_classification.png", p_classification_participants, width = 12, height = 5, dpi=300)

4 Analysis of parameter estimates

Code
df_gpucb_params <- modelFits %>% filter(kernel=='GP' & acq == 'UCB') %>% 
  pivot_longer(c('lambda', 'beta', 'tau'), names_to = 'param', values_to = 'estimate') %>% 
  mutate(param = factor(param, levels = c('lambda', 'beta', 'tau'))) %>% 
  mutate(estimate_log10 = log10(estimate))


df_gpucb_params %>%
  group_by(group, param) %>%
  summarise(
    mean = mean(estimate, na.rm = TRUE),
    median = median(estimate, na.rm = TRUE),
    se = sd(estimate, na.rm = TRUE) / sqrt(n()),
    ci_lower = mean - 1.96 * se,
    ci_upper = mean + 1.96 * se,
    .groups = "drop"
  ) %>%
  mutate(
    summary = sprintf("M = %.2f [%.2f, %.2f], Mdn = %.2f", mean, ci_lower, ci_upper, median)
  ) %>%
  select(group, param, summary) %>%
  pivot_wider(names_from = param, values_from = summary) %>%
  kable(caption = "Mean (95% CI) and median parameter estimates by group", format = "html") %>% 
  kable_styling("striped", full_width = FALSE)
Table 1: Mean (95% CI) and median parameter estimates of the GP-UCB model by group.
Mean (95% CI) and median parameter estimates by group
group lambda beta tau
Control M = 0.65 [0.57, 0.73], Mdn = 0.59 M = 2.36 [-0.64, 5.35], Mdn = 0.37 M = 0.65 [-0.40, 1.71], Mdn = 0.04
PD+ M = 0.56 [0.50, 0.62], Mdn = 0.53 M = 1.99 [-0.22, 4.20], Mdn = 0.37 M = 0.42 [-0.11, 0.94], Mdn = 0.04
PD- M = 0.54 [0.46, 0.63], Mdn = 0.46 M = 10.22 [3.77, 16.68], Mdn = 0.55 M = 1.35 [0.20, 2.50], Mdn = 0.05
Code
p_gpucb_params <- ggplot(df_gpucb_params, aes(x = group, y = estimate, fill = group, shape = group, color = group)) +
  # facet_wrap(~param, nrow = 1) +
  facet_wrap(
    ~ param,
    labeller = labeller(
      param = c(
        "lambda" = "'Generalization '*lambda",
        "beta"   = "'Exploration bonus '*beta",
        "tau"    = "'Random exploration '*tau"
      ),
      .default = label_parsed  
  )) +
  scale_y_log10(name = "Estimate (log scale)", breaks = c(0.01, 0.1, 1, 10, 100), labels = c("0.01", "0.1", "1", "10", "100")) +
  geom_boxplot(alpha = 0.2, width = 0.4, outlier.shape = NA) +  
  geom_jitter(width = 0.15, size = 2) +  
  stat_summary(fun = mean, geom = "point", shape = 23, fill = "white", size = 4) +
  scale_color_manual(values = groupcolors) +
  scale_fill_manual(values = groupcolors) +
  scale_x_discrete("") + 
  ggtitle("GP-UCB parameter estimates: Group differences") + 
  theme_classic() +
  theme(strip.background = element_blank(),  
        strip.text = element_text(color = "black", size=18),
        legend.position = "none",
        legend.justification = c(0, 1),
        axis.title = element_text(size = 18),
        axis.text = element_text(size = 18),
       # plot.margin = margin(0, 0, 0, 0),
        plot.title = element_text(size = 24),
       plot.margin = margin(0, 0, 20, 0) # positive bottom margin to avoid artifacts when later combining plots with cowplot
  )

p_gpucb_params  
ggsave("plots/GP-UCB_params.png", p_gpucb_params,dpi=300, width = 9, height = 5)

# plot for Computational Psychiatry Conference (CPP; Tübingen, July 2025)
# 
# # Define your comparisons
# comparisons <- list(c("PD+", "PD-"), c("Control", "PD+"), c("Control", "PD-"))
# 
# # Extract function for p and BF from ttestPretty output
# # TO DO: Cumbersome via 
# extract_p_and_bf <- function(tt_string) {
#   matches <- stringr::str_match_all(tt_string, "\\$p=([^$]+)\\$|\\$BF=([^$]+)\\$")
#   flat <- unlist(matches)
#   
#   raw_vals <- flat[!is.na(flat) & grepl("^\\.?\\d+", flat)]
#   
#   # Convert to numeric and round to 2 decimal places
#   nums <- signif(as.numeric(raw_vals), 2)
#   
#   # Format with 2 decimal digits (or scientific if very small/large)
#   p_fmt <- formatC(nums[1], digits = 2, format = "f")
#   bf_fmt <- formatC(nums[2], digits = 2, format = "f")
#   
#   paste0("p=", p_fmt, ", BF=", bf_fmt)
# }
# 
# # Loop over each param and each comparison
# comparisons_df <- df_gpucb_params %>%
#   group_by(param) %>%
#   group_modify(~{
#     comparisons <- list(
#       c("Control", "PD+"),
#       c("Control", "PD-"),
#       c("PD+", "PD-")
#     )
#     
#     # For each pairwise group comparison
#     res <- purrr::map_dfr(comparisons, function(groups) {
#       g1 <- groups[1]
#       g2 <- groups[2]
#       
#       x1 <- .x$estimate[.x$group == g1]
#       x2 <- .x$estimate[.x$group == g2]
#       
#       if (length(x1) < 2 || length(x2) < 2) {
#         return(tibble(
#           group1 = g1,
#           group2 = g2,
#           p = NA_real_,
#           BF = NA_real_,
#           y.position = NA_real_
#         ))
#       }
#       
#       # Frequentist test
#       t_res <- t.test(x1, x2, paired = FALSE, var.equal = TRUE)
#       p_val <- t_res$p.value
#       
#       # Bayes Factor
#       bf <- BayesFactor::ttestBF(x = x1, y = x2, paired = FALSE)
#       bf_val <- as.numeric(BayesFactor::extractBF(bf)$bf)
#       
#       # y-position (max value in current param group × offset)
#       y_max <- max(.x$estimate, na.rm = TRUE)
#       y_pos <- y_max * runif(1, 1.05, 1.15)
#       
#       tibble(
#         group1 = g1,
#         group2 = g2,
#         p = p_val,
#         BF = bf_val,
#         y.position = y_pos
#       )
#     })
#     
#     res
#   }) %>%
#   ungroup() %>%
#   mutate(
#     p.format = case_when(
#       p < 0.001 ~ "p<.001",
#       is.na(p) ~ NA_character_,
#       TRUE ~ paste0("p=", signif(p, 2))
#     ),
#     BF.format = case_when(
#       is.na(BF) ~ NA_character_,
#       BF > 100 ~ "BF>100",
#       TRUE ~ paste0("BF=", signif(BF, 2))
#     ),
#     plot_label = paste0(p.format, ", ", BF.format)
#   )
# 
# comparisons_df$y.position <- rep(c(1.8, 2.6, 2.2), 3)
# 
# p_GP_UCB_params_CPP <-  
#   ggboxplot(df_gpucb_params, 
#           x = "group", 
#           y = "estimate",
#           color = "group", palette =groupcolors, fill = "group", alpha = 0.2,
#           add = "jitter", jitter.size = 0.5, shape = "group", title = "GP-UCB parameter estimates") +
#   facet_wrap(~param, nrow = 1) +
#   scale_y_log10(breaks = c(0.01, 0.1, 1, 10, 100), labels = c("0.01", "0.1", "1", "10", "100"), expand = expansion(mult = c(0.1, 0.15))  ) +
#   # scale_y_log10( expand = expansion(mult = c(0, 0.1))  ) +
#   # coord_cartesian(ylim = c(0.01,260)) +
#   ylab("Estimate (log scale)") +
#   xlab("") +
#     # ignore p values because of log; only done to position brackets correctly
#  # stat_compare_means(comparisons = list( c("Control", "PD+"), c("PD+", "PD-"), c("Control", "PD-")  ),
#  #                     paired = F,
#  #                     method = "t.test",
#  #                     # label = "p.format",
#  #                     aes(label = paste0("p = ", after_stat(p.format)))
#  #                     #aes(label = paste0(" "))
#  #  ) +
#     stat_pvalue_manual(
#     filter(comparisons_df, param == "lambda"),
#     label = "plot_label",
#     tip.length = 0.01, bracket.size = 0.3, size = 3
#   ) +
#     stat_pvalue_manual(
#     filter(comparisons_df, param == "beta"),
#     label = "plot_label",
#     tip.length = 0.01, bracket.size = 0.3, size = 3
#   ) +
#     stat_pvalue_manual(
#     filter(comparisons_df, param == "tau"),
#     label = "plot_label",
#     tip.length = 0.01, bracket.size = 0.3, size = 3
#   ) +
#   stat_summary(fun = mean, geom="point", shape = 23, fill = "white", size=2) +
#   theme_classic() +
#   theme(strip.background = element_blank(),  
#         strip.text = element_text(color = "black", size=14),
#         legend.position = "none",
#         plot.title = element_text(size = 20),
#         axis.text= element_text(colour="black", size = 12),
#         axis.title= element_text(colour="black", size = 12),
#         panel.spacing = unit(3, "lines") 
#   )
# 
# ggsave("plots/GP-UCB_params_CPP.png", p_GP_UCB_params_CPP, dpi = 300, width = 8, height = 5)
# ggsave("plots/GP-UCB_params_CPP.pdf", p_GP_UCB_params_CPP, width = 8, height = 5)
Figure 2: Parameter estimates of GP-UCB model, estimated through leave-one-round-out cross validation. Each dot is one participant.

To better understand the mechanisms underlying the observed behavioral differences, we analyzed the parameters of the Gaussian Process Upper Confidence Bound (GP-UCB) model (Figure 2).

4.0.1 Generalization \(\lambda\)

The parameter \(\lambda\) represents the length-scale in the RBF kernel, which governs the amount of generalization, i.e., to what extent participants assume a spatial correlation between options (higher \(\lambda\) = stronger generalization). Generalization was comparable between Controls and PD+ patients, with moderate evidence for weaker generalization in the PD- group than in Controls.

  • Control vs. PD+: \(U=678\), \(p=.145\), \(r_{\tau}=.15\), \(BF=.65\)
  • Control vs. PD-: \(U=731\), \(p=.007\), \(r_{\tau}=.28\), \(BF=3.9\)
  • PD+ vs. PD-: \(U=626\), \(p=.126\), \(r_{\tau}=.16\), \(BF=.44\)
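To illustrate what different \(\lambda\) estimates imply, the snippet below computes the assumed reward correlation between two tiles as a function of their grid distance, for a weak and a strong generalizer (the specific \(\lambda\) values are illustrative, not group estimates).

```r
# RBF kernel: correlation between two tiles as a function of distance d,
# for a weak (small lambda) vs. strong (large lambda) generalizer.
rbf_cor <- function(d, lambda) exp(-d^2 / (2 * lambda^2))

d <- 0:4                           # distance in tiles
weak   <- rbf_cor(d, lambda = 0.5) # correlations decay quickly with distance
strong <- rbf_cor(d, lambda = 2)   # correlations decay slowly with distance
round(rbind(weak, strong), 3)
```

A stronger generalizer treats rewards at neighboring tiles as more informative about each other, so a single observation updates beliefs over a wider region of the grid.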

4.0.2 Exploration bonus \(\beta\)

The parameter \(\beta\) represents the uncertainty bonus, i.e. how much expected rewards are positively inflated by their uncertainty (higher \(\beta\) = more uncertainty-directed exploration). Controls and PD+ patients on medication did not differ, and both groups had lower beta estimates than the dopamine-depleted patients in the PD− group. These differences suggest that levodopa medication modulated the amount of uncertainty-directed exploration by restoring beta to levels comparable to those observed in controls without PD. This aligns with findings from a restless bandit paradigm, where L-Dopa reduced the amount of directed exploration in healthy volunteers, while the level of random exploration remained unaffected (Chakroun et al., 2020).

  • Control vs. PD+: \(U=480\), \(p=.315\), \(r_{\tau}=-.10\), \(BF=.48\)
  • Control vs. PD-: \(U=188\), \(p<.001\), \(r_{\tau}=-.46\), \(BF=81\)
  • PD+ vs. PD-: \(U=220\), \(p<.001\), \(r_{\tau}=-.41\), \(BF=25\)
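A toy example of how the uncertainty bonus shifts preferences: option A has the higher expected reward, option B the higher uncertainty, and a sufficiently large \(\beta\) flips the valuation (the numbers are made up for illustration).

```r
# UCB valuation: expected reward m inflated by beta times the uncertainty s.
ucb <- function(m, s, beta) m + beta * s

m <- c(A = 0.6, B = 0.5)    # expected rewards
s <- c(A = 0.05, B = 0.30)  # uncertainties

names(which.max(ucb(m, s, beta = 0)))  # "A": greedy on the mean ignores uncertainty
names(which.max(ucb(m, s, beta = 1)))  # "B": the uncertainty bonus favors the uncertain option
```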

4.0.3 Random exploration \(\tau\)

The parameter \(\tau\) represents the amount of decision noise, i.e., stochastic variability in the softmax choice rule (higher \(\tau\) = more decision noise, yielding a more uniform choice distribution; conversely, \(\tau \rightarrow 0\) approaches the deterministic argmax, i.e., greedy choice). There were no group differences in the amount of random exploration.

  • Control vs. PD+: \(U=572\), \(p=.896\), \(r_{\tau}=.01\), \(BF=.25\)
  • Control vs. PD-: \(U=500\), \(p=.730\), \(r_{\tau}=-.04\), \(BF=.27\)
  • PD+ vs. PD-: \(U=470\), \(p=.584\), \(r_{\tau}=-.06\), \(BF=.28\)
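A quick illustration of how \(\tau\) shapes the softmax choice probabilities over a fixed set of values (toy numbers): a low temperature concentrates choice on the best option, a high temperature approaches uniform random exploration.

```r
# Softmax with temperature tau (max-subtraction for numerical stability).
softmax <- function(v, tau) { w <- exp((v - max(v)) / tau); w / sum(w) }

v <- c(1.0, 0.8, 0.2)             # toy option values

round(softmax(v, tau = 0.05), 3)  # near-deterministic choice of the best option
round(softmax(v, tau = 5), 3)     # near-uniform choice probabilities
```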

5 Correlation of model parameters with performance

To analyze relationships between parameter estimates and performance, we computed the Kendall correlation \(\tau\) between each parameter and the obtained mean reward.
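As a minimal sketch of this analysis with simulated data (the actual analysis uses the participant-level estimates and rewards), Kendall's \(\tau\) can be computed with base R's `cor.test`:

```r
# Kendall correlation between a parameter estimate and mean reward (toy data).
set.seed(7)
lambda_est  <- runif(30, 0.2, 1.2)                            # simulated lambda estimates
mean_reward <- 0.5 + 0.3 * lambda_est + rnorm(30, sd = 0.03)  # reward increasing in lambda

ct <- cor.test(lambda_est, mean_reward, method = "kendall")
c(tau = unname(ct$estimate), p = ct$p.value)
```

`stat_cor(method = "kendall")` in the plotting code below annotates the panels with the same statistic.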

Code
# mean reward per subject across all trials and rounds (practice and bonus round excluded)
df_mean_reward_subject <- dat %>% 
  filter(trial != 0 & round %in% 2:9) %>% # exclude first (randomly revealed) tile and practice round and bonus round
  group_by(id) %>% 
  summarise(group = first(group),
            sum_reward = sum(z),
            mean_reward = mean(z), 
            sd_reward = sd(z)) 

df_params_performance <- df_gpucb_params %>% 
  left_join(df_mean_reward_subject, by = c("id", "group"))

df_params_performance_wide <- df_gpucb_params %>% 
  pivot_wider(names_from = param, values_from = estimate ) %>% 
  left_join(df_mean_reward_subject, by = c("id", "group"))

The amount of generalization was positively related to obtained rewards, showing that participants who successfully learned about the spatial correlation of rewards performed better. By contrast, both the uncertainty bonus \(\beta\) and the temperature \(\tau\) (random exploration) were negatively related to obtained rewards, showing how too much exploration can impair successful navigation of the explore-exploit trade-off.

Code
# plot correlation lambda and reward 
p_lambda_reward <- 
  ggplot(subset(df_params_performance, param == "lambda"), aes(x = estimate, y = mean_reward)) +
  geom_point(aes(color = group, shape = group, fill = group)) +
  #geom_point() +
  geom_smooth(method = "lm", color = "black", linetype = "dashed", fill = "lightgray", se = TRUE) +
  stat_cor(method = "kendall", cor.coef.name = "tau", label.x = 0.8,  label.y = 0.4, size=5, p.accuracy=0.001) +
  labs(
    title = expression("Generalization " * lambda),
    x = "Estimate (log)",
    y = "Mean normalized reward"
  ) +
  # scale_x_log10(name = "Estimate (log scale)", breaks = c(0.1, 1, 10), labels = c("0.1", "1", "10"), limits = c(0.1, 2)) +
  scale_x_continuous(name = "Estimate", breaks = c(0, 1, 2), labels = c("0", "1", "2"), limits = c(0, 1.45)) +
  scale_y_continuous(breaks = seq(0, 1, .1), limits = c(0.4, 0.85)) +
  scale_fill_manual(values = groupcolors) + 
  scale_color_manual(values = groupcolors) +
  theme_classic() +
  theme(
    plot.title = element_text(hjust = 0.5, size=18),
    legend.title = element_blank(),
    axis.title = element_text(color = "black", size = 14),
    axis.text = element_text(color = "black", size = 14),
    legend.position = c(0.12, 0.15), #bottom: 0.15 top:0.9
    legend.text = element_text(size = 14),
    legend.key.size = unit(0.8, "lines"),
    legend.margin = margin(0, 0, 0, 0),       
  legend.box.margin = margin(0, 0, 0, 0),
  legend.key = element_rect(fill = NA, colour = NA)
  )

ggsave("plots/cor_lambda_reward.png", p_lambda_reward, dpi = 300, width = 8, height = 5)

# plot correlation between exploration bonus beta (log10) and reward  
p_beta_reward <-
  ggplot(subset(df_params_performance, param == "beta"), aes(x = estimate, y = mean_reward)) +
  geom_point(aes(color = group, shape = group, fill = group)) +
  # geom_point() +
  geom_smooth(method = "lm", color = "black", linetype = "dashed", fill = "lightgray", se = TRUE) +
  stat_cor(method = "kendall", cor.coef.name = "tau", label.x = c(0),  label.y.npc = c("top"), p.accuracy=0.001, size=5) +
  labs(
    title = expression("Exploration bonus " * beta),
    x = "Estimate (log)",
    y = " "
  ) +
  scale_x_log10(name = "Estimate (log scale)",  breaks = c(0.01, 0.1, 1, 10), labels = c("0.01","0.1", "1", "10"), limits = c(0.005, 70)) +
  scale_y_continuous(breaks = seq(0, 1, .1), limits = c(0.4, 0.85)) +
  scale_fill_manual(values = groupcolors) + 
  scale_color_manual(values = groupcolors) +
  theme_classic() +
  theme(
    plot.title = element_text(hjust = 0.5, size=18),
    legend.title = element_blank(),
    axis.title = element_text(color = "black", size = 14),
    axis.text = element_text(color = "black", size = 14),
    legend.position = "none"
    # legend.position = c(0.1, 0.2),
    # legend.text = element_text(size = 14),
    # legend.key.size = unit(1.5, "lines")  
  )

ggsave("plots/cor_beta_reward.png", p_beta_reward, dpi = 300, width = 8, height = 5)


# plot correlation between amount of random exploration (temperature tau of softmax choice rule) and reward  
 p_tau_reward <-
  ggplot(subset(df_params_performance, param == "tau"), aes(x = estimate, y = mean_reward)) +
  geom_point(aes(color = group, shape = group, fill = group)) +
  # geom_point() +
  geom_smooth(method = "lm", color = "black", linetype = "dashed", fill = "lightgray", se = TRUE) +
  stat_cor(method = "kendall", cor.coef.name = "tau", label.x = c(-0.1),  label.y.npc = c("top"), p.accuracy=0.01, size=5) +
  labs(
    title = expression("Random exploration " * tau),
    x = "Estimate (log)",
    y = " "
  ) +
  scale_x_log10(name = "Estimate (log scale)",  breaks = c(0.01, 0.1, 1, 10), labels = c("0.01","0.1", "1", "10"), limits = c(0.005, 20)) +
  scale_y_continuous(breaks = seq(0, 1, .1), limits = c(0.4, 0.85)) +
  scale_fill_manual(values = groupcolors) + 
  scale_color_manual(values = groupcolors) +
  theme_classic() +
  theme(
    plot.title = element_text(hjust = 0.5, size=18),
    legend.title = element_blank(),
    axis.title = element_text(color = "black", size = 14),
    axis.text = element_text(color = "black", size = 14),
    legend.position = "none"
  )

ggsave("plots/cor_tau_reward.png", p_tau_reward, dpi = 300, width = 8, height = 5)

# put plots together in one row and add a shared title
p_parameters_reward <- grid.arrange(
  grobs = list(p_lambda_reward, p_beta_reward, p_tau_reward),
  nrow = 1,
  top = textGrob(
    "         GP-UCB parameters: Correlation with performance", 
    x = 0,            
    hjust = 0,        
    gp = gpar(fontsize = 24)
  ),
  padding = unit(0.5, "lines") 
)

ggsave("plots/p_parameters_reward.png", p_parameters_reward, dpi = 300, width = 12, height = 4)
 
# # plot correlation between parameter estimates and mean reward, with inset for smaller estimate range 
# main_plot <- ggscatter(df_params_performance, x = "estimate", y = "mean_reward",
#                        add = "reg.line",  
#                        add.params = list(color = "darkred", fill = "lightgray"), 
#                        conf.int = TRUE 
# ) +
#   facet_wrap(~param, scales = "free_x") +
#   stat_cor(method = "pearson", label.x = c(0,3,3), label.y = 45) +
#   ggtitle("Correlation between GP-UCB parameters and reward") +
#   scale_y_continuous("Mean reward", breaks = seq(0,45,10)) +
#   xlab("Estimate") +
#   theme_classic() +
#   theme(strip.background = element_blank(),  
#         strip.text = element_text(color = "black", size=12),
#         legend.title = element_blank(),
#         axis.title = element_text(color = "black", size=14),
#         axis.text = element_text(color = "black", size=14))
# 
# # Inset plot for beta (values 0-1 only)
# inset_beta <- 
#   ggscatter(df_params_performance %>% filter(param == "beta" & estimate > 0 & estimate <= 1), 
#             x = "estimate", y = "mean_reward",
#             add = "reg.line",  
#             add.params = list(color = "darkred", fill = "lightgray"), 
#             conf.int = TRUE 
#   ) +
#   stat_cor(method = "pearson", label.x = 0.1, label.y = 42, size = 3) +
#   scale_x_continuous( breaks = c(0,0.25, 0.5, 0.75), labels = c("0.0", "0.25", "0.5", "0.75")) +# breaks = seq(0,1,0.1),
#   theme_classic() +
#   theme(axis.title = element_blank(),
#         strip.background = element_blank(), 
#         strip.text = element_blank(), 
#         legend.position = "none")
# 
# # Inset plot for tau (values 0-1 only)
# inset_tau <- 
#   ggscatter(df_params_performance %>% filter(param == "tau" & estimate > 0 & estimate <= 1), 
#             x = "estimate", y = "mean_reward",
#             add = "reg.line",  
#             add.params = list(color = "darkred", fill = "lightgray"), 
#             conf.int = TRUE 
#   ) +
#   stat_cor(method = "pearson", label.x = 0.05, label.y = 42, size = 3) +
#   scale_x_continuous(breaks = seq(0,1,0.1)) +
#   theme_classic() +
#   theme(axis.title = element_blank(),strip.background = element_blank(), strip.text = element_blank(), legend.position = "none")
# 
# 
# # Main plot with insets for beta and tau
# p_GP_UCB_params_cor_reward_inset <- 
#   ggdraw(main_plot) +
#   draw_plot(inset_beta, x = 0.5, y = 0.45, width = 0.15, height = 0.35) +
#   draw_plot(inset_tau, x = 0.8, y = 0.45, width = 0.15, height = 0.35)
# 
# p_GP_UCB_params_cor_reward_inset
# 
# ggsave("plots/GP-UCB_params_cor_reward.png", p_GP_UCB_params_cor_reward_inset, width = 12, height= 4, dpi=300)
Figure 3: Correlation of GP-UCB parameters with obtained mean reward across all trials and rounds. Each dot is one participant.

Correlations of parameter estimates with performance (mean reward), using Kendall’s \(r_{\tau}\) because it is rank-based and therefore invariant to the log transformation of the estimates:
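Why Kendall’s \(r_{\tau}\)? It depends only on the ranks, so any strictly increasing transformation such as log10 leaves it exactly unchanged. A minimal self-contained illustration on toy data (not the study data):

```r
# Toy illustration: Kendall's tau is computed from ranks only, so a strictly
# increasing transformation such as log10 leaves it unchanged
set.seed(1)
estimate    <- rlnorm(50)                                    # positive parameter estimates
mean_reward <- 0.6 + 0.1 * log(estimate) + rnorm(50, 0, 0.1) # noisy performance measure

tau_raw <- cor(estimate,        mean_reward, method = "kendall")
tau_log <- cor(log10(estimate), mean_reward, method = "kendall")
all.equal(tau_raw, tau_log)  # TRUE
```

Pearson’s correlation, by contrast, would change under the log transformation, which is why the figures above report \(r_{\tau}\) on log-scaled axes.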

5.1 Generalization \(\lambda\)

Overall, the extent of generalization was positively related to performance, suggesting that participants who generalized more strongly obtained higher rewards:

  • Overall: \(r_{\tau}=.26\), \(p<.001\), \(BF>100\)
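For intuition about what \(\lambda\) does, here is a sketch under the standard assumption (cf. Wu et al., 2018) that \(\lambda\) is the length-scale of a radial basis function kernel: larger \(\lambda\) makes similarity decay more slowly with distance, so reward expectations generalize further across neighboring options.

```r
# Sketch (not the fitted model): lambda as the length-scale of an RBF kernel.
# Larger lambda -> slower decay with distance -> broader generalization.
rbf <- function(d, lambda) exp(-d^2 / (2 * lambda^2))

d <- 0:5                          # distance between two options on the grid
round(rbf(d, lambda = 0.5), 3)    # narrow generalization
round(rbf(d, lambda = 2.0), 3)    # broad generalization
```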

Analysis of parameter estimates on the group level showed that this overall relation was primarily driven by PD+ patients, who showed a strong relation, whereas there was no relation in controls or PD- patients:

  • Control: \(r_{\tau}=.13\), \(p=.288\), \(BF=.39\)
  • PD+: \(r_{\tau}=.45\), \(p<.001\), \(BF>100\)
  • PD-: \(r_{\tau}=-.01\), \(p=.973\), \(BF=.23\)

5.2 Exploration bonus \(\beta\)

The exploration bonus \(\beta\), which drives uncertainty-directed exploration, was negatively related to performance, suggesting that participants who explored too much at the cost of exploiting known high-value options achieved lower rewards:

  • Overall: \(r_{\tau}=-.59\), \(p<.001\), \(BF>100\)
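The mechanism behind this trade-off can be sketched as follows: the UCB rule values each option as its posterior mean plus \(\beta\) times its posterior uncertainty, so a large \(\beta\) can pull choices toward uncertain options even when a known option has a higher expected reward (the means and SDs below are hypothetical):

```r
# Sketch of the UCB valuation: beta weights the uncertainty bonus, trading off
# exploitation (posterior mean m) against directed exploration (posterior SD s)
ucb <- function(m, s, beta) m + beta * s

m <- c(known = 0.80, uncertain = 0.50)   # hypothetical posterior means
s <- c(known = 0.05, uncertain = 0.40)   # hypothetical posterior SDs

names(which.max(ucb(m, s, beta = 0.1)))  # "known": exploit the high-value option
names(which.max(ucb(m, s, beta = 1.0)))  # "uncertain": explore instead
```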

Analysis of parameter estimates on the group level showed that this negative relation held consistently across all three groups:

  • Control: \(r_{\tau}=-.43\), \(p<.001\), \(BF>100\)
  • PD+: \(r_{\tau}=-.61\), \(p<.001\), \(BF>100\)
  • PD-: \(r_{\tau}=-.60\), \(p<.001\), \(BF>100\)

5.3 Random exploration \(\tau\)

The temperature parameter \(\tau\) of the softmax choice rule, representing random exploration, was not reliably related to performance:

  • Overall: \(r_{\tau}=-.07\), \(p=.308\), \(BF=.22\)
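As a reminder of the mechanism, the softmax rule maps option values onto choice probabilities via \(P(a) \propto \exp(v_a/\tau)\): the higher the temperature \(\tau\), the flatter the distribution and the more random the choices. A minimal sketch with hypothetical values:

```r
# Softmax choice rule: tau is the temperature. As tau grows, choice
# probabilities flatten toward uniform, i.e., choices become more random.
softmax <- function(v, tau) {
  z <- exp((v - max(v)) / tau)   # subtract max(v) for numerical stability
  z / sum(z)
}

v <- c(0.2, 0.5, 0.9)            # hypothetical option values
round(softmax(v, tau = 0.05), 3) # near-deterministic choice of the best option
round(softmax(v, tau = 5), 3)    # near-uniform, largely random choice
```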

Group-level analyses likewise showed no reliable relation in any group; the evidence was at best equivocal for a negative relation in PD- patients:

  • Control: \(r_{\tau}=-.02\), \(p=.860\), \(BF=.23\)
  • PD+: \(r_{\tau}=.09\), \(p=.451\), \(BF=.30\)
  • PD-: \(r_{\tau}=-.23\), \(p=.077\), \(BF=1.1\)

5.3.1 Further analyses

  1. Take the ratio of exploratory to exploitative choices per participant and correlate it with the model parameters.
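This planned analysis could be sketched as below, on toy data; the data frame and column names (`df_trials`, `id`, `choice_type`) are hypothetical placeholders, not objects from this analysis:

```r
# Hypothetical sketch: compute each participant's explore/exploit ratio and
# correlate it with a parameter estimate (here a toy beta per participant)
library(dplyr)

df_trials <- data.frame(                     # toy trial-level data
  id          = rep(1:4, each = 5),
  choice_type = rep(c("explore", "exploit"), length.out = 20)
)
df_params <- data.frame(id = 1:4, beta = c(0.1, 0.5, 1.2, 3.0))  # toy estimates

df_trials %>%
  group_by(id) %>%
  summarise(explore_ratio = mean(choice_type == "explore"), .groups = "drop") %>%
  inner_join(df_params, by = "id") %>%
  summarise(r_tau = cor(beta, explore_ratio, method = "kendall"))
```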

6 Appendix

6.1 Big overview figure computational results

Code
# Combine
cowplot::plot_grid(
  p_model_comparison,
  p_classification_participants,
  p_gpucb_params,
  p_parameters_reward,
  ncol = 1,
  # rel_heights = c(1.2, 1, 1),
  labels = c("auto"),
  label_y = 1.02, 
  label_size = 22,
  align = "v",     
  axis = "l"       
)

ggsave("plots/computational_results.png", dpi=300, height=16, width=14)


# # Figure including GP-UCB visualization
# # get GP-UCB model illustration (Giron et al., 2023, NHB)
# img <- image_read("img/GP-UCB model.png")  
# gimg <- rasterGrob(as.raster(img), interpolate = TRUE)
# 
# # add title
# gp_ucp_model <- ggdraw() +
#   draw_label("Gaussian Process Upper Confidence Bound (GP-UCB) Model", size = 24, x = 0.05, y = 1, hjust = 0, vjust = 1.5) +
#   # draw_grob(gimg, x = 0.5, y = 0.5, width = 1, height = 1)
#   draw_grob(gimg)
# 
# # Combine
# cowplot::plot_grid(
#   gp_ucp_model,
#   p_model_comparison,
#   p_gpucb_params,             
#   ncol = 1,
#   # rel_heights = c(1.2, 1, 1),
#   labels = c("AUTO"),
#   label_size = 22
# )
# 
# ggsave("plots/computational_results.png", dpi=300, height=16, width=15)
Figure 4: Computational results.

7 Model params of participants best explained by GP-UCB model

Code
df_gpucb_params_subset <- modelFits %>% 
  filter(best_ModelName == "GP-UCB") %>% 
  filter(kernel=='GP' & acq == 'UCB') %>% 
  pivot_longer(c('lambda', 'beta', 'tau'), names_to = 'param', values_to = 'estimate') %>% 
  mutate(param = factor(param, levels = c('lambda', 'beta', 'tau'))) 


ggboxplot(df_gpucb_params_subset, 
          x = "group", 
          y = "estimate",
          color = "group", palette =groupcolors, fill = "group", alpha = 0.2,
          add = "jitter", jitter.size = 0.5, shape = "group", title = "GP-UCB parameter estimates") +
  facet_wrap(~param, nrow = 1) +
  scale_y_log10(breaks = c(0.01, 0.1, 1, 10, 100), labels = c("0.01", "0.1", "1", "10", "100")) +
  ylab("Estimate (log scale") +
  xlab("") +
  stat_summary(fun = mean, geom="point", shape = 23, fill = "white", size=2) +
  theme_classic() +
  theme(strip.background = element_blank(),  
        strip.text = element_text(color = "black", size=12),
        legend.position = "none",
        plot.title = element_text(size = 18)
  )
  
ggsave("plots/GP-UCB_params_subset.png", width = 9, height = 5)
Figure 5: Parameter estimates of the GP-UCB model, estimated through leave-one-round-out cross-validation. Each dot is one participant. Only participants best described by the GP-UCB model are included.

8 Session Information

Code
sessionInfo()
R version 4.5.0 (2025-04-11)
Platform: aarch64-apple-darwin20
Running under: macOS Sequoia 15.5

Matrix products: default
BLAS:   /Library/Frameworks/R.framework/Versions/4.5-arm64/Resources/lib/libRblas.0.dylib 
LAPACK: /Library/Frameworks/R.framework/Versions/4.5-arm64/Resources/lib/libRlapack.dylib;  LAPACK version 3.12.1

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

time zone: Europe/Berlin
tzcode source: internal

attached base packages:
[1] grid      stats     graphics  grDevices utils     datasets  methods  
[8] base     

other attached packages:
 [1] magick_2.8.7           rstatix_0.7.2          parameters_0.26.0     
 [4] ggthemes_5.1.0         waffle_1.0.2           cowplot_1.1.3         
 [7] scales_1.4.0           hms_1.1.3              ggpubr_0.6.0          
[10] viridis_0.6.5          viridisLite_0.4.2      emmeans_1.11.1        
[13] afex_1.4-1             kableExtra_1.4.0       brms_2.22.0           
[16] Rcpp_1.0.14            lsr_0.5.2              sjPlot_2.8.17         
[19] lme4_1.1-37            RColorBrewer_1.1-3     lubridate_1.9.4       
[22] forcats_1.0.0          stringr_1.5.1          dplyr_1.1.4           
[25] purrr_1.0.4            readr_2.1.5            tidyr_1.3.1           
[28] tibble_3.3.0           ggplot2_3.5.2          tidyverse_2.0.0       
[31] BayesFactor_0.9.12-4.7 Matrix_1.7-3           coda_0.19-4.1         
[34] gridExtra_2.3         

loaded via a namespace (and not attached):
 [1] Rdpack_2.6.4         pbapply_1.7-2        rlang_1.1.6         
 [4] magrittr_2.0.3       matrixStats_1.5.0    compiler_4.5.0      
 [7] mgcv_1.9-3           loo_2.8.0            systemfonts_1.2.3   
[10] vctrs_0.6.5          reshape2_1.4.4       crayon_1.5.3        
[13] pkgconfig_2.0.3      fastmap_1.2.0        backports_1.5.0     
[16] labeling_0.4.3       rmarkdown_2.29       tzdb_0.5.0          
[19] nloptr_2.2.1         ragg_1.4.0           bit_4.6.0           
[22] MatrixModels_0.5-4   xfun_0.52            jsonlite_2.0.0      
[25] sjmisc_2.8.10        ggeffects_2.3.0      broom_1.0.8         
[28] parallel_4.5.0       R6_2.6.1             stringi_1.8.7       
[31] extrafontdb_1.0      car_3.1-3            boot_1.3-31         
[34] numDeriv_2016.8-1.1  estimability_1.5.1   knitr_1.50          
[37] extrafont_0.19       bayesplot_1.13.0     splines_4.5.0       
[40] timechange_0.3.0     tidyselect_1.2.1     rstudioapi_0.17.1   
[43] abind_1.4-8          yaml_2.3.10          sjlabelled_1.2.0    
[46] curl_6.4.0           lattice_0.22-7       lmerTest_3.1-3      
[49] plyr_1.8.9           bayestestR_0.16.0    withr_3.0.2         
[52] bridgesampling_1.1-2 posterior_1.6.1      evaluate_1.0.4      
[55] RcppParallel_5.1.10  xml2_1.3.8           pillar_1.10.2       
[58] carData_3.0-5        tensorA_0.36.2.1     DT_0.33             
[61] checkmate_2.3.2      reformulas_0.4.1     insight_1.3.0       
[64] distributional_0.5.0 generics_0.1.4       vroom_1.6.5         
[67] rstantools_2.4.0     minqa_1.2.8          glue_1.8.0          
[70] tools_4.5.0          ggsignif_0.6.4       mvtnorm_1.3-3       
[73] Rttf2pt1_1.3.12      rbibutils_2.3        datawizard_1.1.0    
[76] nlme_3.1-168         performance_0.14.0   Formula_1.2-5       
[79] cli_3.6.5            textshaping_1.0.1    svglite_2.2.1       
[82] Brobdingnag_1.2-9    sjstats_0.19.1       gtable_0.3.6        
[85] digest_0.6.37        htmlwidgets_1.6.4    farver_2.1.2        
[88] htmltools_0.5.8.1    lifecycle_1.0.4      bit64_4.6.0-1       
[91] MASS_7.3-65         

References

Chakroun, K., Mathar, D., Wiehler, A., Ganzer, F., & Peters, J. (2020). Dopaminergic modulation of the exploration/exploitation trade-off in human decision-making. Elife, 9, e51260.
Giron, A. P., Ciranka, S., Schulz, E., Bos, W. van den, Ruggeri, A., Meder, B., & Wu, C. M. (2023). Developmental changes in exploration resemble stochastic optimization. Nature Human Behaviour, 7(11), 1955–1967. https://doi.org/10.1038/s41562-023-01662-1
Meder, B., Wu, C. M., Schulz, E., & Ruggeri, A. (2021). Development of directed and random exploration in children. Developmental Science, 24(4), e13095. https://doi.org/10.1111/desc.13095
Schulz, E., Wu, C. M., Ruggeri, A., & Meder, B. (2019). Searching for rewards like a child means less generalization and more directed exploration. Psychological Science, 30(11), 1561–1572. https://doi.org/10.1177/0956797619863663
Wu, C. M., Meder, B., & Schulz, E. (2025). Unifying principles of generalization: Past, present, and future. Annual Review of Psychology, 76, 275–302. https://doi.org/10.1146/annurev-psych-021524-110810
Wu, C. M., Schulz, E., Garvert, M. M., Meder, B., & Schuck, N. W. (2020). Similarities and differences in spatial and non-spatial cognitive maps. PLOS Computational Biology, 16(9), e1008149. https://doi.org/10.1371/journal.pcbi.1008149
Wu, C. M., Schulz, E., Speekenbrink, M., Nelson, J. D., & Meder, B. (2018). Generalization guides human exploration in vast decision spaces. Nature Human Behaviour, 2, 915–924. https://doi.org/10.1038/s41562-018-0467-4